
What Do AI-Generated Images Want?

Wasielewski, Amanda

arXiv.org Artificial Intelligence

W.J.T. Mitchell's influential essay 'What do pictures want?' shifts the theoretical focus away from the interpretative act of understanding pictures and from the motivations of the humans who create them to the possibility that the picture itself is an entity with agency and wants. In this article, I reframe Mitchell's question in light of contemporary AI image generation tools to ask: what do AI-generated images want? Drawing from art historical discourse on the nature of abstraction, I argue that AI-generated images want specificity and concreteness because they are fundamentally abstract. Multimodal text-to-image models, which are the primary subject of this article, are based on the premise that text and image are interchangeable or exchangeable tokens and that there is a commensurability between them, at least as represented mathematically in data. The user pipeline that sees textual input become visual output, however, obscures this representational regress and makes it seem like one form transforms into the other -- as if by magic.



A Methodology for Studying Linguistic and Cultural Change in China, 1900-1950

Stewart, Spencer Dean

arXiv.org Artificial Intelligence

This paper presents a quantitative approach to studying linguistic and cultural change in China during the first half of the twentieth century, a period that remains understudied in computational humanities research. The dramatic changes in Chinese language and culture during this time call for greater reflection on the tools and methods used for text analysis. This preliminary study offers a framework for analyzing Chinese texts from the late nineteenth and twentieth centuries, demonstrating how established methods such as word counts and word embeddings can provide new historical insights into the complex negotiations between Western modernity and Chinese cultural discourse.


Variation of sentence length across time and genre

Rudnicka, Karolina

arXiv.org Artificial Intelligence

The goal of this paper is threefold: i) to present some practical aspects of using the full-text version of the Corpus of Historical American English (COHA), the largest diachronic multi-genre corpus of the English language, in the investigation of a linguistic trend of change; ii) to test the widely held assumption that sentence length in written English has been steadily decreasing over the past few centuries; iii) to point to a possible link between changes in sentence length and changes in English syntactic usage. The empirical proof of concept for iii) is provided by the decline in the frequency of the non-finite purpose subordinator in order to. Sentence length, genre and the likelihood of occurrence of in order to are shown to be interrelated.


What Is Noise?

The New Yorker

"Noise" is a fuzzy word--a noisy one, in the statistical sense. Its meanings run the gamut from the negative to the positive, from the overpowering to the mysterious, from anarchy to sublimity. The negative seems to lie at the root: etymologists trace the word to "nuisance" and "nausea." Noise is what drives us mad; it sends the Grinch over the edge at Christmastime. ("Oh, the Noise! Noise! Noise!") Noise is the sound of madness itself, the din within our minds. The demented narrator of Poe's "The Tell-Tale Heart" jabbers about noise while he hallucinates his victim's heartbeat: "I found that the noise was not within my ears. . . . The noise steadily increased. . . ." Yet noise can be righteous and majestic. The Psalms are full of joyful noise, noise unto the Lord. In the Book of Ezekiel, the voice of God is said to be "like a noise of many waters." In "Paradise Lost," Heaven makes "infernal noise" as it beats back the armies of Hell. At the same time, the word can summon all manner of ...


Sāmayik: A Benchmark and Dataset for English-Sanskrit Translation

Maheshwari, Ayush, Gupta, Ashim, Krishna, Amrith, Ramakrishnan, Ganesh, Kumar, G. Anil, Singla, Jitin

arXiv.org Artificial Intelligence

Sanskrit is a low-resource language with a rich heritage. Digitized Sanskrit corpora reflecting contemporary usage, particularly in prose, are heavily under-represented, and no English-Sanskrit parallel dataset of such material is publicly available. To bridge this gap, we release Sāmayik, a dataset of more than 42,000 parallel English-Sanskrit sentences drawn from four different corpora. We also release benchmarks adapted from existing multilingual pretrained models for Sanskrit-English translation. Our training splits include both our contemporary dataset and the Sanskrit-English parallel sentences from the training split of Itihāsa, a previously released classical-era Sanskrit machine translation dataset.


People Over Robots: The Global Economy Needs Immigration Before Automation

#artificialintelligence

We live in a technological age--or so we are told. Machines promise to transform every facet of human life: robots will staff factory floors, driverless cars will rule the road, and artificial intelligence will govern weapons systems. Politicians and analysts fret over the consequences of such advances, worrying about the damage that will be done to industries and individuals. Governments, they argue, must help manage the costs of progress. These conversations almost always treat technological change as something to be adapted to, as if it were a force of nature, barreling inexorably into the staid conventions and assumptions of modern life. The pace of change seems irrepressible; new technologies will remake societies. All people can do is figure out how best to cope. Nowhere is this outlook more apparent than in the discussion of automation and its impact on jobs. My local grocery store in rural Utah has hung, with no apparent sense of irony, a sign proclaiming the company's support for U.S. workers above a self-checkout machine, a device that uses technology to replace the labor of an employee with the labor of the customer.


The World-Changing Race to Develop the Quantum Computer

The New Yorker

This content can also be viewed on the site it originates from. On the outskirts of Santa Barbara, California, between the orchards and the ocean, sits an inconspicuous warehouse, its windows tinted brown and its exterior painted a dull gray. The facility has almost no signage, and its name doesn't appear on Google Maps. A small label on the door reads "Google AI Quantum." Inside, the computer is being reinvented from scratch.


AI's Future Doesn't Have to Be Dystopian - Boston Review

#artificialintelligence

Artificial Intelligence (AI) is not likely to make humans redundant. Nor will it create superintelligence anytime soon. But like it or not, AI technologies and intelligent systems will make huge advances in the next two decades--revolutionizing medicine, entertainment, and transport; transforming jobs and markets; enabling many new products and tools; and vastly increasing the amount of information that governments and companies have about individuals. Should we cherish and look forward to these developments, or fear them? There are reasons to be concerned. Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI's full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms. Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society. Worse still, the weakening of democracy makes formulating solutions to the adverse labor market and distributional effects of AI much more difficult. These dangers have only multiplied during the COVID-19 crisis. Lockdowns, social distancing, and workers' vulnerability to the virus have given an additional boost to the drive for automation, with the majority of U.S. businesses reporting plans for more automation.


Hitting the Books: Is the hunt for technological supremacy harming our collective humanity?

Engadget

Stand aside, humanity: you're holding up progress. We've passed the point of usefulness for Homo sapiens; now dawns the era of Homo faber. The idea that "I think therefore I am" has become quaint in this new age of builders and creators. But has our continued obsession with technology and progress actually set back our capacity for humanity? In his new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, author and pioneering natural language processing researcher Erik J. Larson investigates the efforts to build computers that process information the way we do, and why we're much farther from human-equivalent AI than most futurists would care to admit.